Two key obstacles in biomedical relation extraction (RE) are the scarcity of annotations and the prevalence of instances without explicitly pre-defined labels due to low annotation coverage. Existing approaches, which treat biomedical RE as a multi-class classification task, often generalize poorly in low-resource settings and cannot selectively abstain on unknown cases, instead guessing among seen relations, which hinders their applicability. We present NBR, which converts biomedical RE into a natural language inference (NLI) formulation through indirect supervision. By converting relations into natural language hypotheses, NBR can exploit semantic cues to alleviate annotation scarcity. By incorporating a ranking-based loss that implicitly calibrates abstinent instances, NBR learns a clearer decision boundary and is instructed to abstain on uncertain instances. Extensive experiments on three widely used biomedical RE benchmarks, namely ChemProt, DDI and GAD, verify the effectiveness of NBR in both full-set and low-resource regimes. Our analysis demonstrates that indirect supervision benefits biomedical RE even when a domain gap exists, and that combining NLI knowledge with biomedical knowledge yields the best performance gains.
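A minimal sketch of the two ideas in this abstract, reformulating relations as natural-language hypotheses and abstaining when the ranking of entailment scores is too close. The templates, score values, and margin below are invented for illustration; they are not NBR's actual verbalizations or calibration procedure.

```python
def verbalize(relation, head, tail):
    """Turn a relation label into a natural-language hypothesis (toy templates)."""
    templates = {
        "upregulator": f"{head} upregulates {tail}.",
        "inhibitor": f"{head} inhibits {tail}.",
    }
    return templates[relation]

def predict_with_abstention(entail_scores, margin=0.1):
    """Pick the top-ranked relation, or abstain when the top two scores are too close."""
    ranked = sorted(entail_scores.items(), key=lambda kv: -kv[1])
    best_rel, best_score = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    if best_score - runner_up < margin:
        return "abstain"
    return best_rel
```

In use, an off-the-shelf NLI model would score each hypothesis against the input sentence; here the scores are simply assumed given.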
We propose EM-PASTE: an Expectation Maximization (EM) guided Cut-Paste compositional dataset augmentation approach for weakly-supervised instance segmentation using only image-level supervision. The proposed method consists of three main components. The first component generates high-quality foreground object masks. To this end, an EM-like approach is proposed that iteratively refines an initial set of object mask proposals generated by a generic region proposal method. Next, in the second component, high-quality context-aware background images are generated using a text-to-image compositional synthesis method such as DALL-E. Finally, the third component creates a large-scale pseudo-labeled instance segmentation training dataset by compositing the foreground object masks onto the original and generated background images. The proposed approach achieves state-of-the-art weakly-supervised instance segmentation results on both the PASCAL VOC 2012 and MS COCO datasets by using only image-level, weak label information. In particular, it outperforms the best baseline by +7.4 and +2.8 mAP0.50 on PASCAL and COCO, respectively. Further, the method provides a new solution to the long-tail weakly-supervised instance segmentation problem (when many classes may have only a few training samples), by selectively augmenting under-represented classes.
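The third component's cut-paste step can be sketched as a simple mask-guided composite. This toy version works on nested-list grayscale images and a binary mask; the real pipeline operates on full RGB images with refined masks, so treat this purely as an illustration of the compositing operation.

```python
def paste(background, foreground, mask, top, left):
    """Composite a foreground patch onto a background image (nested lists of
    pixel values) at offset (top, left), copying only pixels where mask == 1."""
    out = [row[:] for row in background]  # leave the original background intact
    for i, mask_row in enumerate(mask):
        for j, m in enumerate(mask_row):
            if m:
                out[top + i][left + j] = foreground[i][j]
    return out
```

Repeating this over many foreground masks and (original or generated) backgrounds yields the pseudo-labeled training set described above.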
With the rapid growth of data generated by smart devices and the exponential surge of processing demand in the Internet of Things (IoT) era, resource-rich cloud centres have been employed to tackle these challenges. To relieve the burden on cloud centres, edge-cloud computation offloading has become a promising solution, since offloading computation tasks from the cloud to edge devices shortens the proximity between the data source and the computation, improving performance and quality of service (QoS). Several optimisation models for edge-cloud computation offloading have been proposed that consider computation costs and heterogeneous communication costs. However, several important factors, such as the heterogeneity of tasks, load balancing among nodes and the profit yielded by computation tasks, have not been jointly considered, which motivates PECCO, the profit- and cost-oriented computation offloading model proposed in this paper. Considering that the model is hard in nature and the optimisation objective is not differentiable, we propose PECCO-MFI, an improved moth-flame optimiser that addresses certain deficiencies of the original moth-flame optimiser, and integrate it under the edge-cloud environment. Comprehensive experiments are conducted to verify the superior performance of the proposed method when optimising the proposed task offloading model under the edge-cloud environment.
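For readers unfamiliar with the base algorithm, here is a deliberately simplified moth-flame optimisation loop on a generic box-constrained objective. The flame-update simplification, parameter choices, and bounds are ours for the sketch; they do not reflect the PECCO-MFI improvements described in the abstract.

```python
import math
import random

def mfo(cost, dim, n_moths=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Toy moth-flame optimisation for minimising cost over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    moths = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_moths)]
    flames = sorted([m[:] for m in moths], key=cost)
    for t in range(iters):
        # Flames: the best n_moths positions found so far, sorted by cost.
        flames = sorted(flames + [m[:] for m in moths], key=cost)[:n_moths]
        n_flames = max(1, round(n_moths - t * (n_moths - 1) / iters))
        a = -1 - t / iters  # convergence constant, decreases from -1 toward -2
        for i, m in enumerate(moths):
            f = flames[min(i, n_flames - 1)]  # the worst moths share the last flame
            for d in range(dim):
                dist = abs(f[d] - m[d])
                r = (a - 1) * rng.random() + 1  # spiral parameter in [a, 1]
                m[d] = dist * math.exp(r) * math.cos(2 * math.pi * r) + f[d]
                m[d] = min(hi, max(lo, m[d]))  # keep within the box
    best = min(flames + moths, key=cost)
    return best, cost(best)
```

Because the objective is treated as a black box, such a metaheuristic can optimise non-differentiable offloading costs like the one above.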
Training computer vision models usually requires collecting and labeling vast amounts of imagery under diverse scene configurations and properties. This process is incredibly time-consuming, and it is challenging to ensure that the captured data distribution maps well to the target domain of an application scenario. Recently, synthetic data has emerged as a way to address both of these issues. However, existing approaches either require human experts to manually tune each scene property or use automatic methods that offer little to no control; this necessitates rendering large amounts of random data variations, which is slow and often suboptimal for the target domain. We introduce the first fully differentiable synthetic data pipeline, which uses Neural Radiance Fields (NeRFs) in a closed loop with a target application's loss function. Our approach generates data without human labor so as to maximize accuracy on a target task. We illustrate the effectiveness of our method on synthetic and real-world object detection tasks. We also introduce a new "YCB-in-the-Wild" dataset and benchmark, which provides a test scenario for object detection with varied poses in real-world environments.
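The closed-loop idea, adjusting scene parameters so that the generated data minimises a downstream task loss, can be caricatured in one dimension. This sketch uses finite differences on a stand-in loss rather than backpropagation through a NeRF renderer, so it only illustrates the feedback loop, not the actual pipeline.

```python
def optimize_scene_param(render_loss, theta0, lr=0.1, steps=50, eps=1e-4):
    """Toy closed loop: nudge one scene parameter theta to minimise a downstream
    task loss, using a central finite-difference gradient estimate."""
    theta = theta0
    for _ in range(steps):
        grad = (render_loss(theta + eps) - render_loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta
```

In the real pipeline the gradient flows analytically through the differentiable renderer instead of being estimated numerically.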
Object cut-and-paste has become a promising approach for efficiently generating large amounts of labeled training data. It involves compositing foreground object masks onto background images. Background images, when congruent with the objects, provide helpful context information for training object recognition models. Although the approach can easily generate large labeled datasets, finding congruent context images for downstream tasks remains an elusive problem. In this work, we propose a new paradigm for automatic context image generation at scale. At the core of our approach lies the interplay between language descriptions of context and language-driven image generation. A language description of a context is obtained by applying an image captioning method to a small set of images representing the context. These language descriptions are then used to generate diverse sets of context images with the language-based DALL-E image generation framework. The generated images are then composited with objects to provide an augmented training set for a classifier. We demonstrate the advantages of our approach over prior context image generation methods on four object detection datasets. Furthermore, we also highlight the compositional nature of our data generation approach in out-of-distribution and zero-shot data generation scenarios.
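The caption-then-generate pipeline can be expressed as a small function taking the two models as injected callables. The function name and the callable interfaces are assumptions for this sketch; in practice `caption_fn` would wrap an image captioner and `text2img_fn` a DALL-E-style generator.

```python
def generate_context_images(context_images, caption_fn, text2img_fn, n_per_caption=2):
    """Caption a few images that represent the target context, then synthesize
    n_per_caption new background images from each caption."""
    captions = [caption_fn(img) for img in context_images]
    backgrounds = []
    for cap in captions:
        backgrounds.extend(text2img_fn(cap) for _ in range(n_per_caption))
    return backgrounds
```

The returned backgrounds would then feed the cut-and-paste compositing step described above.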
Augmenting pre-trained language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks. However, for a given task instance, the KG, or certain parts of it, may not be useful. Although KG-augmented models often use attention to focus on specific KG components, the KG is still always used, and the attention mechanism is never explicitly taught which components should be used. Meanwhile, saliency methods can measure how much a KG feature (e.g., graph, node, path) influences the model toward making the correct prediction, thus explaining which KG features are useful. This paper explores how such saliency explanations can be used to improve the performance of KG-augmented models. First, we propose creating coarse (is the KG useful?) and fine (which nodes/paths in the KG are useful?) saliency explanations. Second, to motivate saliency-based supervision, we analyze oracle KG-augmented models that directly use saliency explanations as extra inputs to guide their attention. Third, we propose SalKG, a framework for KG-augmented models to learn from coarse and/or fine saliency explanations. Given saliency explanations created from a task's training set, SalKG jointly trains the model to predict the explanations and then solve the task by attending to the KG features highlighted by the predicted explanations. On three commonsense QA benchmarks (CSQA, OBQA, CODAH) and a range of KG-augmented models, we show that SalKG can yield considerable performance gains, up to a 2.76% absolute improvement on CSQA.
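Two of the ingredients can be sketched in a few lines: a coarse saliency target derived by comparing model confidence with and without the KG, and a joint objective mixing the task loss with the explanation-prediction loss. The comparison rule and the `alpha` mixing weight are our assumptions for the sketch, not SalKG's exact formulation.

```python
def coarse_label(prob_with_kg, prob_without_kg):
    """Coarse saliency target: was the KG useful for this instance?
    Label 1 when the KG raises confidence in the correct answer, else 0."""
    return 1 if prob_with_kg > prob_without_kg else 0

def salkg_loss(task_loss, saliency_loss, alpha=0.5):
    """Joint objective: interpolate between solving the task and
    predicting the saliency explanation."""
    return (1 - alpha) * task_loss + alpha * saliency_loss
```

During training, the predicted explanations would then gate which KG features the model attends to.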
With the increasing enrichment and development of the financial derivatives market, the frequency of transactions is also growing faster and faster. Due to human limitations, algorithmic and automatic trading have recently become the focus of discussion. In this paper, we propose a bidirectional LSTM neural network based on an attention mechanism, applied to two popular assets, gold and Bitcoin. In terms of feature engineering, we add traditional technical factors and at the same time combine time-series models to develop factors. For model parameters, we finally chose a two-layer deep learning network. According to the AUC measure, the accuracy for Bitcoin and gold is 71.94% and 73.03%, respectively. Using the forecast results, we achieved a return of 1089.34% in two years. We also compare the attention-based Bi-LSTM model proposed in this paper with traditional models, and the results show that our model performs best on this dataset. Finally, we discuss the significance of the model and the experimental results, as well as possible directions for future improvement.
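The attention component over a Bi-LSTM's hidden states can be illustrated with plain dot-product attention pooling. This is a generic sketch on lists of floats with an assumed query vector; the paper's exact attention parameterization is not specified in the abstract.

```python
import math

def attention_pool(hidden_states, query):
    """Softmax dot-product attention over a sequence of hidden vectors,
    returning the attention-weighted sum (a single pooled vector)."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states)) for d in range(dim)]
```

The pooled vector would then feed a dense layer that outputs the trading signal.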
More and more stock trading strategies are constructed using deep reinforcement learning (DRL) algorithms, but DRL methods originally widely used in the gaming community are not directly adaptable to financial data, which have low signal-to-noise ratios and are uneven, and thus suffer from performance shortcomings. In this paper, to capture the hidden information, we propose a DRL-based stock trading system using cascaded LSTMs, which first uses an LSTM to extract time-series features from daily stock data; the extracted features are then fed to the agent for training, while the policy functions in reinforcement learning also use another LSTM for training. Experiments on the DJI in the US market and the SSE50 in the Chinese stock market show that our model outperforms previous baseline models in terms of cumulative returns and Sharpe ratio, and this advantage is more significant in the Chinese stock market, an emerging market. This indicates that our proposed method is a promising way to build an automated stock trading system.
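The two evaluation metrics mentioned here have standard definitions, sketched below from daily returns. The annualization factor of 252 trading days and the sample (n-1) variance are conventional assumptions, not details taken from the paper.

```python
import math

def cumulative_return(daily_returns):
    """Total compounded return over the period, e.g. 0.21 means +21%."""
    total = 1.0
    for r in daily_returns:
        total *= 1 + r
    return total - 1

def sharpe_ratio(daily_returns, risk_free=0.0, periods=252):
    """Annualized Sharpe ratio: mean excess return over its standard deviation."""
    excess = [r - risk_free for r in daily_returns]
    mean = sum(excess) / len(excess)
    var = sum((e - mean) ** 2 for e in excess) / (len(excess) - 1)
    return mean / math.sqrt(var) * math.sqrt(periods)
```

Both are computed on the agent's realized daily returns over the backtest window.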
With the improvement of the computing power and algorithmic accuracy of personal devices, biometric features are increasingly widely used for personal identification; palm vein recognition offers rich extractable features and has been widely studied in recent years. However, traditional recognition methods are poorly robust and susceptible to environmental influences such as reflections and noise. In this paper, a convolutional neural network based on VGG-16 transfer learning fused with an attention mechanism is used as the feature extraction network on an infrared palm vein dataset. The palm vein classification task is first trained using palmprint classification methods, followed by matching with a similarity function, for which we propose a multi-task loss function to improve matching accuracy. To verify the robustness of the model, experiments were carried out on datasets from different sources. We then used K-means clustering to determine an adaptive matching threshold and finally achieved an accuracy of 98.89% on the prediction set. At the same time, matching is highly efficient, taking an average of 0.13 seconds per palm vein pair, which means our method can be adopted in practice.
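The K-means-based adaptive threshold can be sketched as a 1-D two-cluster Lloyd iteration over similarity scores, with the decision threshold placed midway between the two centroids. The midpoint rule and initialization are our assumptions; the paper does not specify its exact clustering setup in the abstract.

```python
def adaptive_threshold(scores, iters=20):
    """Run 1-D 2-means on match-similarity scores (a mix of genuine and impostor
    pairs) and return the midpoint between the two cluster centroids."""
    c0, c1 = min(scores), max(scores)  # initialize centroids at the extremes
    for _ in range(iters):
        g0 = [s for s in scores if abs(s - c0) <= abs(s - c1)]
        g1 = [s for s in scores if abs(s - c0) > abs(s - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return (c0 + c1) / 2
```

Pairs scoring above the threshold would be accepted as the same palm, those below rejected.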
In this paper, we allocate IoT devices as resources for smart services with time-constrained resource requirements. The allocation method, named BRAD, can work under multiple resource scenarios with diverse resource richness, availability and cost, such as the intelligent healthcare system deployed by Harbin Institute of Technology (HIT-IHC). The allocation aims for bi-metric balancing under the multi-scenario case, i.e., the profit and cost associated with service satisfaction are jointly optimised and wisely balanced. Besides, we abstract IoT devices as digital objects (DOs) to make them easier to interact with during resource allocation. Considering that the problem is NP-hard and the optimisation objective is not differentiable, we utilise the Grey Wolf Optimisation (GWO) algorithm as the model optimiser. Specifically, we tackle the deficiencies of GWO and significantly improve its performance by introducing three new mechanisms, forming the BRAD-GWA algorithm. Comprehensive experiments are conducted on realistic HIT-IHC IoT testbeds and several algorithms are compared, including the allocation method originally used by the HIT-IHC system, to verify the effectiveness of BRAD-GWA. BRAD-GWA achieves a 3.14-times and 29.6% objective reduction compared with HIT-IHC and the original GWO algorithm, respectively.
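For context, here is a compact sketch of the baseline GWO loop on a generic box-constrained objective, with alpha, beta, and delta wolves guiding the pack. All parameter choices are ours for illustration; the three BRAD-GWA mechanisms are not reproduced here.

```python
import random

def gwo(cost, dim, n_wolves=15, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Toy grey wolf optimisation for minimising cost over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best, best_cost = None, float("inf")
    for t in range(iters):
        wolves.sort(key=cost)
        if cost(wolves[0]) < best_cost:
            best, best_cost = wolves[0][:], cost(wolves[0])
        leaders = [w[:] for w in wolves[:3]]  # alpha, beta, delta snapshots
        a = 2.0 - 2.0 * t / iters  # decreases from 2 to 0 over the run
        for w in wolves:
            for d in range(dim):
                pos = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a  # large |A| explores, small |A| exploits
                    C = 2.0 * r2
                    D = abs(C * leader[d] - w[d])
                    pos += leader[d] - A * D
                w[d] = min(hi, max(lo, pos / 3.0))  # average of the three pulls
    final = min(wolves, key=cost)
    if cost(final) < best_cost:
        best, best_cost = final[:], cost(final)
    return best, best_cost
```

As with the moth-flame variant above, the objective is treated as a black box, which suits non-differentiable allocation objectives.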